Object movement identification is one of the most researched problems in computer vision: the task of classifying each pixel as foreground or background. Although numerous traditional machine learning and deep learning methods exist for this problem, most share two major issues: the need for large amounts of ground-truth data and inferior performance on unseen videos. Since every pixel of every frame has to be labeled, acquiring large amounts of data for these techniques is rather expensive. Recently, Zhao et al. [1] proposed a novel Arithmetic Distribution Neural Network (ADNN) for universal background subtraction, which utilizes probability information from the histogram of temporal pixels and achieves promising results. Building on this work, we developed an intelligent video surveillance system that uses the ADNN architecture for motion detection, trims the video to only the parts containing motion, and performs anomaly detection on the trimmed video.
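The distributional input the abstract refers to, the histogram of temporal pixels, can be sketched as follows. This is a minimal illustration of building per-pixel temporal intensity histograms from a frame stack, not the ADNN itself; the function name and bin count are our own choices.

```python
import numpy as np

def temporal_histograms(frames, n_bins=32):
    """Build a per-pixel histogram over time from a stack of grayscale
    frames of shape (T, H, W) with values in [0, 255]. Each pixel's
    histogram approximates the temporal intensity distribution that
    ADNN-style models consume as input."""
    T, H, W = frames.shape
    # Map each 8-bit intensity to one of n_bins equal-width bins.
    bins = np.clip((frames.astype(np.int64) * n_bins) // 256, 0, n_bins - 1)
    hists = np.zeros((H, W, n_bins), dtype=np.float64)
    # Count, per pixel, how many frames fall in each bin.
    for b in range(n_bins):
        hists[:, :, b] = (bins == b).sum(axis=0)
    # Normalize counts to a probability distribution over bins.
    return hists / T

# Example: a tiny 2x2 "video" of 10 random frames.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(10, 2, 2)).astype(np.uint8)
h = temporal_histograms(frames)
```

A background pixel yields a histogram concentrated in a few bins, while a pixel crossed by moving objects spreads its mass, which is the cue a classifier can exploit.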
As language models have grown in parameters and layers, it has become much harder to train them, and run inference with them, on single GPUs. This severely restricts the availability of large language models such as GPT-3, BERT-Large, and many others. A common technique to address this problem is pruning the network architecture by removing transformer heads, fully connected weights, and other modules. The main challenge is to discern the important parameters from the less important ones; our goal is to find strong metrics for identifying such parameters. We thus propose two strategies for calculating importance scores: Cam-Cut, based on GradCAM interpretations, and Smooth-Cut, based on SmoothGrad. Through this work, we show that our scoring functions assign more relevant task-based scores to the network parameters, and thus both of our pruning approaches significantly outperform the standard weight- and gradient-based strategies, especially at higher compression ratios in BERT-based models. We also analyze our pruning masks and find them to be significantly different from those obtained using standard metrics.
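The general shape of importance-score pruning can be sketched as below. This is only an illustrative gradient-weighted saliency |w * dL/dw| with a threshold mask; the actual Cam-Cut and Smooth-Cut scoring functions are more involved, and all names here are our own.

```python
import numpy as np

def importance_scores(weights, grads):
    """Gradient-weighted importance in the spirit of attribution-based
    scoring: parameters whose weight and task gradient are jointly large
    are considered important. Illustrative only, not Cam-Cut/Smooth-Cut."""
    return np.abs(weights * grads)

def prune_mask(scores, compression_ratio):
    """Boolean keep-mask that zeroes out the lowest-scoring fraction of
    parameters; compression_ratio is the fraction removed."""
    k = int(scores.size * compression_ratio)
    if k == 0:
        return np.ones_like(scores, dtype=bool)
    # k-th smallest score becomes the pruning threshold.
    threshold = np.partition(scores.ravel(), k - 1)[k - 1]
    return scores > threshold

w = np.array([[0.5, -0.1], [0.02, 1.2]])
g = np.array([[0.1, 0.9], [0.01, 0.2]])
mask = prune_mask(importance_scores(w, g), compression_ratio=0.5)
```

Note that pure weight-magnitude pruning would keep the large weight 1.2 and 0.5, whereas the gradient-weighted score can rank a small weight with a large gradient above a large weight the task barely uses, which is the intuition behind task-based scoring.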
We aim to address this problem by introducing a comprehensive distributed deep learning (DDL) profiler, which can determine the various execution "stalls" that DDL suffers from while running on public clouds. We have implemented the profiler by extending prior work to estimate two types of communication stalls: interconnect stalls and network stalls. We train popular DNN models using the profiler to characterize various AWS GPU instances, and list their advantages and shortcomings so that users can make informed decisions. We observe that the more expensive GPU instances may not be the most performant for all DNN models, and that AWS may allocate hardware interconnect resources suboptimally. Specifically, intra-machine interconnects can introduce communication overheads of up to 90% of DNN training time, and network-connected instances can suffer from up to 5x slowdowns compared to training on a single instance. Furthermore, we model the impact of DNN macroscopic features, such as the number of layers and the number of gradients, on communication stalls. Finally, we propose a measurement-based recommendation model for users to lower the monetary cost of DDL on public clouds.
In today's modern digital world, we have many online question-and-answer platforms, such as Stack Exchange, Quora, and GFG, which serve as a medium for people to communicate and help each other. In this paper, we analyze the effectiveness of Stack Overflow in helping newcomers to programming. Every user on the platform goes through a journey: for their first 12 months, we consider them novices; after 12 months, they fall into one of the following categories: experienced, lurker, or curious. Each question has tags assigned to it, and we observe that questions with certain specific tags have faster response times, indicating more active communities in those areas than in others. The platform grew steadily until 2013, after which activity began to decline, but recently, during the 2020 pandemic, we can see rejuvenated activity on the platform.
In this paper, we discuss our work in progress on building speech corpora for four low-resource Indian languages, Awadhi, Bhojpuri, Braj, and Magahi, using field methods of linguistic data collection. The total size of the corpora is currently approximately 18 hours (around 4-5 hours per language), transcribed and annotated with grammatical information such as part-of-speech tags, morphological features, and Universal Dependencies relations. We discuss our methodology for collecting data in these languages, most of which was carried out in the midst of the COVID-19 pandemic; one of its aims was to provide some additional income to speakers of these languages from low-income groups. We also discuss the results of baseline experiments on automatic speech recognition systems for these languages.
This work presents a multitask approach to the simultaneous estimation of age, country of origin, and emotion from vocal burst audio for the 2022 ICML Expressive Vocalizations Challenge ExVo-MultiTask track. The chosen method utilizes a combination of spectro-temporal modulation and self-supervised features, followed by an encoder-decoder network organized in a multitask paradigm. We evaluate the complementarity between the constituent tasks by examining independent task-specific models and a joint model, and explore the relative strengths of different feature sets. We also introduce a simple score-fusion mechanism to leverage the complementarity of the different feature sets for this task. We find that robust data preprocessing combined with score fusion over the spectro-temporal receptive field and HuBERT models achieved our best ExVo-MultiTask test score of 0.412.
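The simple score-fusion idea mentioned above can be sketched as a (possibly weighted) average of per-model prediction scores. This is a generic late-fusion illustration under our own naming, not the exact formula used in the challenge entry.

```python
import numpy as np

def fuse_scores(score_list, weights=None):
    """Late fusion of model outputs: stack per-model score arrays of shape
    (n_samples, n_outputs) and take a weighted average across models."""
    scores = np.stack(score_list)  # (n_models, n_samples, n_outputs)
    if weights is None:
        weights = np.full(len(score_list), 1.0 / len(score_list))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so weights sum to 1
    return np.tensordot(weights, scores, axes=1)

# Two hypothetical models scoring one sample over two outputs.
a = np.array([[0.2, 0.8]])
b = np.array([[0.6, 0.4]])
fused = fuse_scores([a, b])  # equal-weight average
```

Fusing at the score level lets feature sets with different strengths (here, spectro-temporal versus self-supervised features) contribute without retraining a joint model.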
ReLU fully-connected networks are ubiquitous but uninterpretable, because they fit piecewise linear functions arising from multi-layered structures and complex interactions of model weights. This paper takes a novel approach to piecewise fitting by using set operations on the individual pieces. This is done by approximating canonical normal forms and using the resultant as a model. This provides special advantages: (a) strong correspondence of parameters to pieces of the fit function (high interpretability); (b) the ability to fit any combination of continuous functions as pieces of the piecewise function (ease of design); (c) the ability to add new non-linearities in targeted regions of the domain (targeted learning); and (d) the simplicity of an equation that avoids layering. The model can also be expressed in the general max-min representation of piecewise linear functions, which gives it theoretical grounding and credibility. The architecture is tested on simulated regression and classification tasks and on benchmark datasets, including UCI datasets, MNIST, FMNIST, and CIFAR-10. Its performance is on par with fully connected architectures. It can find applications wherever fully connected layers must be replaced by interpretable layers.
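The max-min representation the abstract appeals to can be made concrete: every continuous piecewise linear function can be written as a maximum over groups of minima of affine pieces, f(x) = max_i min_j (a_ij x + b_ij). A minimal sketch, with an example function of our own choosing:

```python
def max_min_eval(x, groups):
    """Evaluate a piecewise linear function given in max-min form:
    f(x) = max over groups of (min over (a, b) in group of a*x + b).
    Each affine piece (a, b) is directly readable from the parameters,
    which is the interpretability property the abstract highlights."""
    return max(min(a * x + b for a, b in group) for group in groups)

# Example: the "hat" function f(x) = max(0, min(x, 2 - x)),
# which rises from 0 at x=0 to 1 at x=1 and back to 0 at x=2.
groups = [
    [(0.0, 0.0)],             # the constant piece 0
    [(1.0, 0.0), (-1.0, 2.0)] # min(x, 2 - x)
]
```

Each group corresponds to a region where its minimum is active, so the parameters map directly onto visible pieces of the fitted function, unlike the entangled weights of a deep ReLU network.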
Since the onset of the COVID-19 pandemic in 2020, millions of people have succumbed to this deadly virus. Many attempts have been made to devise automated testing methods that can detect the virus, and researchers around the globe have proposed deep learning based methods to detect COVID-19 using chest X-rays. However, issues have been raised about the publicly available chest X-ray datasets that most researchers use. In this paper, we propose a two-stage methodology to address this topical issue. Two experiments have been conducted as part of stage 1 of the methodology to exhibit the presence of bias in the datasets. Subsequently, an image segmentation, super-resolution, and CNN-based pipeline, along with different image augmentation techniques, is proposed in stage 2 of the methodology to reduce the effect of the bias. An InceptionResNetV2 trained on chest X-ray images augmented with histogram equalization followed by gamma correction, when passed through the pipeline proposed in stage 2, yielded a top accuracy of 90.47% on the 3-class (Normal, Pneumonia, and COVID-19) classification task.
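The augmentation pair named above, histogram equalization followed by gamma correction, is standard image preprocessing and can be sketched with plain NumPy. This is a generic illustration, not the paper's exact pipeline (which also involves segmentation and super-resolution).

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap intensities through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

def gamma_correct(img, gamma=0.8):
    """Power-law intensity transform: out = 255 * (in / 255) ** gamma.
    gamma < 1 brightens midtones; gamma > 1 darkens them."""
    return np.clip(255.0 * (img / 255.0) ** gamma, 0, 255).astype(np.uint8)

# Apply equalization, then gamma correction, as in the abstract's ordering.
img = np.arange(256, dtype=np.uint8).reshape(16, 16)
out = gamma_correct(equalize_hist(img))
```

Equalization spreads the intensity distribution so lung structures with low contrast become distinguishable, and the subsequent gamma step tunes overall brightness before the images reach the CNN.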
Our paper aims to analyze political polarization in the US political system using language models, and thereby help candidates make informed decisions. The availability of this information will help voters understand their candidates' views on the economy, healthcare, education, and other social issues. Our main contributions are a dataset extracted from Wikipedia that spans the past 120 years and a language-model-based method that helps analyze how polarized a candidate is. Our data is divided into two parts, background information and political information about a candidate, since our hypothesis is that a candidate's political views should be based on reason and be independent of factors such as birthplace, alma mater, etc. We further split this data chronologically into four phases, to help understand whether and how polarization among candidates changes. The data has been cleaned to remove biases. To understand polarization, we begin by showing results from classical language models, Word2Vec and Doc2Vec. We then use more powerful techniques such as the Longformer, a transformer-based encoder, to assimilate more information and find the nearest neighbors of each candidate based on their political views and their background.
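The nearest-neighbor step over candidate embeddings can be sketched as a cosine-similarity search. The vectors and the candidate indexing below are purely illustrative stand-ins for the Doc2Vec or Longformer embeddings the abstract describes.

```python
import numpy as np

def nearest_neighbors(embeddings, query_idx, k=2):
    """Return the indices of the k candidates whose embedding vectors are
    most cosine-similar to the query candidate's embedding."""
    # L2-normalize so the dot product equals cosine similarity.
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = E @ E[query_idx]
    order = np.argsort(-sims)  # most similar first
    return [int(i) for i in order if i != query_idx][:k]

# Three hypothetical candidates embedded in 2-D: the first two hold
# similar views, the third is far from both.
emb = np.array([[1.0, 0.0],
                [0.9, 0.1],
                [0.0, 1.0]])
nbrs = nearest_neighbors(emb, query_idx=0, k=1)
```

Running the same search once on political-view embeddings and once on background embeddings lets one test the paper's hypothesis that the two neighbor structures should be largely independent.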
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. It is the largest-ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with predicting the future motion of "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD map with 3D lane and crosswalk geometry, sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.